A brain-inspired algorithm for training highly sparse neural networks
Authors
Abstract
Sparse neural networks attract increasing interest as they exhibit comparable performance to their dense counterparts while being computationally efficient. Pruning dense neural networks is among the most widely used methods to obtain a sparse network. Driven by the high training cost of such methods, which can be unaffordable for low-resource devices, training sparse neural networks from scratch has recently gained attention. However, existing sparse training algorithms suffer from various issues, including poor performance in high-sparsity scenarios, computing dense gradient information during training, or purely random topology search. In this paper, inspired by the evolution of the biological brain and Hebbian learning theory, we present a new sparse training approach that evolves sparse neural networks according to the behavior of the neurons in the network. Concretely, by exploiting the cosine similarity metric to measure the importance of connections, our proposed method, Cosine similarity-based and Random Topology Exploration (CTRE), evolves the topology of sparse neural networks by adding the most important connections to the network without calculating dense gradients in the backward pass. We carried out different experiments on eight datasets, including tabular, image, and text datasets, and demonstrate that our proposed method outperforms several state-of-the-art sparse training algorithms in extremely sparse neural networks by a large gap. The implementation code is available at https://github.com/zahraatashgahi/CTRE
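To make the core idea concrete, below is a minimal sketch of a cosine-similarity-based growth step in the Hebbian spirit described above ("neurons that fire together wire together"): candidate connections are scored by how similar the activation patterns of their endpoint neurons are over a batch, and the top-scoring absent connections are added to the sparsity mask. The function names (cosine_importance, grow_connections) and the masking scheme are illustrative assumptions, not the paper's actual implementation; see the linked repository for CTRE itself.

```python
import numpy as np

def cosine_importance(pre_acts, post_acts):
    """Pairwise cosine similarity between each pre-neuron's and each
    post-neuron's activation vector, taken over a batch of samples.

    pre_acts:  (batch, n_pre)  activations entering the layer
    post_acts: (batch, n_post) activations leaving the layer
    Returns an (n_pre, n_post) importance matrix.
    """
    pre = pre_acts / (np.linalg.norm(pre_acts, axis=0, keepdims=True) + 1e-12)
    post = post_acts / (np.linalg.norm(post_acts, axis=0, keepdims=True) + 1e-12)
    return pre.T @ post  # (n_pre, n_post) cosine similarities

def grow_connections(mask, pre_acts, post_acts, n_new):
    """Activate the n_new currently-absent connections whose endpoint
    neurons have the most similar activation patterns. No gradient
    information is needed, only forward-pass activations."""
    scores = np.abs(cosine_importance(pre_acts, post_acts))
    scores[mask.astype(bool)] = -np.inf      # skip existing connections
    flat = np.argsort(scores, axis=None)[-n_new:]
    new_mask = mask.copy()
    new_mask[np.unravel_index(flat, mask.shape)] = 1
    return new_mask
```

Note that this growth criterion only requires activations from the forward pass, which is what lets such a method avoid computing dense gradients in the backward pass.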
Similar resources
Training of Feed-Forward Neural Networks for Pattern-Classification Applications Using Music Inspired Algorithm
There have been numerous biologically inspired algorithms used to train feed-forward artificial neural networks, such as genetic algorithms, particle swarm optimization, and ant colony optimization. The Harmony Search (HS) algorithm is a stochastic meta-heuristic inspired by the improvisation process of musicians. HS is used as an optimization method and is reported to be a competitive alt...
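As a rough illustration of how such a gradient-free meta-heuristic can train a network, here is a minimal Harmony Search sketch that minimizes a loss over a flat weight vector (e.g. the flattened weights of a small feed-forward network). All parameter names and defaults here are illustrative assumptions, not the cited paper's configuration.

```python
import numpy as np

def harmony_search(loss, dim, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   iters=1000, bounds=(-1.0, 1.0), rng=None):
    """Minimal Harmony Search: keep a memory of candidate solutions
    ('harmonies'), improvise a new one each iteration, and replace the
    worst member whenever the new harmony improves on it."""
    rng = rng or np.random.default_rng()
    lo, hi = bounds
    memory = rng.uniform(lo, hi, size=(hms, dim))   # harmony memory
    scores = np.array([loss(h) for h in memory])
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                 # memory consideration
                new[j] = memory[rng.integers(hms), j]
                if rng.random() < par:              # pitch adjustment
                    new[j] += bw * rng.uniform(-1, 1)
            else:                                   # random improvisation
                new[j] = rng.uniform(lo, hi)
        s = loss(new)
        worst = scores.argmax()
        if s < scores[worst]:                       # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = scores.argmin()
    return memory[best], scores[best]
```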
DSD: Dense-Sparse-Dense Training for Deep Neural Networks
Modern deep neural networks have a large number of parameters, making them very hard to train. We propose DSD, a dense-sparse-dense training flow, for regularizing deep neural networks and achieving better optimization performance. In the first D (Dense) step, we train a dense network to learn connection weights and importance. In the S (Sparse) step, we regularize the network by pruning the un...
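For intuition, here is a minimal sketch of the dense-sparse-dense flow this abstract describes, assuming a PyTorch model and a user-supplied train_step closure that performs one optimization step; the pruning schedule, magnitude criterion, and function names are illustrative assumptions, not the DSD authors' code.

```python
import torch

def dsd_train(model, train_step, sparsity=0.5, epochs=(10, 10, 10)):
    """Dense -> Sparse -> Dense training flow (sketch).
    1) Dense: ordinary training to learn weights and their importance.
    2) Sparse: prune the smallest-magnitude weights, then retrain with
       the pruning mask re-applied after each step.
    3) Dense: free the pruned weights and retrain the full network.
    """
    for _ in range(epochs[0]):                      # D: dense training
        train_step(model)

    masks = {}
    for name, p in model.named_parameters():        # S: magnitude pruning
        if p.dim() > 1:                             # prune weight matrices only
            k = max(1, int(p.numel() * sparsity))
            thresh = p.detach().abs().flatten().kthvalue(k).values
            masks[name] = (p.detach().abs() > thresh).float()
            p.data *= masks[name]
    for _ in range(epochs[1]):                      # sparse retraining
        train_step(model)
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name in masks:
                    p.data *= masks[name]           # keep pruned weights at zero

    for _ in range(epochs[2]):                      # D: dense retraining
        train_step(model)
```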
Provable Methods for Training Neural Networks with Sparse Connectivity
We provide novel guaranteed approaches for training feedforward neural networks with sparse connectivity. We leverage the techniques developed previously for learning linear networks and show that they can also be effectively adopted to learn non-linear networks. We operate on the moments involving the label and the score function of the input, and show that their factorization provably yields t...
Journal
Journal title: Machine Learning
Year: 2022
ISSN: 0885-6125, 1573-0565
DOI: https://doi.org/10.1007/s10994-022-06266-w